A Status Report on Research in Transparent Informed Prefetching (CMU-CS-93-113)

Authors

  • R. Hugo Patterson
  • Garth A. Gibson
  • M. Satyanarayanan
Abstract

This paper focuses on extending the power of caching and prefetching to reduce file read latencies by exploiting application-level hints about future I/O accesses. We argue that systems that disclose high-level knowledge can transfer optimization information across module boundaries in a manner consistent with sound software engineering principles. Such Transparent Informed Prefetching (TIP) systems provide a technique for converting the high throughput of new technologies such as disk arrays and log-structured file systems into low latency for applications. Our preliminary experiments show that even without a high-throughput I/O subsystem TIP yields reduced execution time of up to 30% for applications obtaining data from a remote file server and up to 13% for applications obtaining data from a single local disk. These experiments indicate that greater performance benefits will be available when TIP is integrated with low-level resource management policies and highly parallel I/O subsystems such as disk arrays.
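The abstract does not specify the hint interface, so the following is only a toy sketch: it assumes a simple LRU block cache and a fixed prefetch depth, and shows how an application-disclosed list of future block accesses could let the cache fetch blocks before they are demanded. The `InformedCache` class and its parameters are hypothetical illustrations, not the actual TIP design.

```python
from collections import OrderedDict

class InformedCache:
    """Toy cache that uses application-disclosed hints (an ordered list of
    future block IDs) to prefetch ahead of demand reads. Illustrative only;
    the real TIP system operates inside the OS buffer cache."""

    def __init__(self, capacity, depth=2):
        self.capacity = capacity    # max resident blocks
        self.depth = depth          # how many hinted blocks to fetch ahead
        self.cache = OrderedDict()  # insertion/access order approximates LRU
        self.hints = []             # disclosed future accesses
        self.hits = 0
        self.misses = 0

    def disclose(self, future_blocks):
        """Application hint: the blocks it will read, in order."""
        self.hints = list(future_blocks)

    def _fetch(self, block):
        if block not in self.cache:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least-recently-used
            self.cache[block] = True

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)  # mark most-recently-used
        else:
            self.misses += 1
            self._fetch(block)
        # consume the matching hint, then prefetch the next `depth` hints
        if self.hints and self.hints[0] == block:
            self.hints.pop(0)
        for b in self.hints[:self.depth]:
            self._fetch(b)

cache = InformedCache(capacity=4, depth=2)
accesses = [1, 2, 3, 4, 5, 6]
cache.disclose(accesses)
for b in accesses:
    cache.read(b)
print(cache.hits, cache.misses)  # → 5 1: only the first read misses
```

With hints, every access after the first finds its block already resident; without the `disclose` call, each of the six reads would miss. This is the latency-for-throughput conversion the abstract describes: prefetches overlap I/O with computation.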


Similar papers

A Trace-Driven Comparison of Algorithms for Parallel Prefetching and Caching (CMU-CS-96-174)

High-performance I/O systems depend on prefetching and caching in order to deliver good performance to applications. These two techniques have generally been considered in isolation, even though there are significant interactions between them; a block prefetched too early reduces the effectiveness of the cache, while a block cached too long reduces the effectiveness of prefetching. In this paper ...


Improving Index Performance through Prefetching (CMU-CS-00-177)

This paper proposes and evaluates Prefetching B-Trees (pB-Trees), which use prefetching to accelerate two important operations on B-Tree indices: searches and range scans. To accelerate searches, pB-Trees use prefetching to effectively create wider nodes than the natural data transfer size: e.g., eight vs. one cache lines or disk pages. These wider nodes reduce the height of the B-Tree, thereby ...
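The height reduction from wider nodes follows from the logarithmic relation height ≈ ⌈log_fanout N⌉. A quick illustration, using hypothetical fanouts of 8 and 64 keys per node (roughly one vs. eight cache lines; the exact figures are assumptions, not values from the paper):

```python
import math

def btree_height(num_keys, fanout):
    """Levels needed for a B-tree indexing num_keys with the given fanout."""
    return math.ceil(math.log(num_keys, fanout))

# One-cache-line nodes vs. eight-cache-line (prefetched) nodes,
# for an index over one million keys:
print(btree_height(10**6, 8))   # → 7 levels
print(btree_height(10**6, 64))  # → 4 levels
```

Since each level of the tree costs roughly one cache-miss (or disk-read) latency on a search, shortening the tree from 7 to 4 levels is where the speedup comes from, provided the wider node can be fetched in about the same time via prefetching.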


Preparation and Biomedical properties of transparent chitosan/gelatin/honey/aloe vera nanocomposite

Objective(s): Biodegradable polymers are featured with notable potentials for biotechnology and bioengineering purposes. Still, there are limitations in their applicability so that in many cases composite forms are used. The present study is focused on chitosan (CS), gelatin (GEL), honey (H) and aloe vera (AV) for preparation of thin films. Methods: To prepare the thin film, CS and GEL wit...


Affinity Scheduling in Staged Server Architectures (CMU-CS-02-113)

Modern servers typically process request streams by assigning a worker thread to a request, and rely on a round robin policy for context-switching. Although this programming paradigm is intuitive, it is oblivious to the execution state and ignores each software module’s affinity to the processor caches. As a result, resumed threads of execution suffer additional delays due to conflict and compu...


Using Transparent Informed Prefetching (TIP) to Reduce File Read Latency

No current solution fully addresses read latency. TIP reduces latency: it exploits high-level hints that don't violate modularity, and it converts throughput to latency. Preliminary TIP test results: as processor performance gains continue to outstrip Input/Output gains, I/O performance is becoming critical to overall system performance. File read latency is the most significant bottleneck for high perf...




Publication date: 2015